Face Generation

In this project, you'll use generative adversarial networks to generate new images of faces.

Get the Data

You'll be using two datasets in this project:

  • MNIST
  • CelebA

Since the CelebA dataset is complex and this is your first project using GANs, we want you to test your neural network on MNIST before moving on to CelebA. Running the GAN on MNIST will let you see how well your model trains sooner.

If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".

In [1]:
data_dir = './data'

# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'


"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper

helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
Found mnist Data
Found celeba Data

Explore the Data

MNIST

As you're aware, the MNIST dataset contains images of handwritten digits. You can change how many examples are shown by changing show_n_images.

In [2]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot

mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
Out[2]:
<matplotlib.image.AxesImage at 0x7f3f5c82f550>

CelebA

The CelebFaces Attributes Dataset (CelebA) contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can change how many examples are shown by changing show_n_images.

In [3]:
show_n_images = 25

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
celeba_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(celeba_images, 'RGB'))
Out[3]:
<matplotlib.image.AxesImage at 0x7f3f5c7240b8>

Preprocess the Data

Since the project's main focus is on building the GAN, we'll preprocess the data for you. The MNIST and CelebA images are 28x28, with values scaled to the range of -0.5 to 0.5. The CelebA images are cropped to remove the parts of each image that don't include a face, then resized down to 28x28.

The MNIST images are black-and-white images with a single color channel, while the CelebA images have 3 color channels (RGB).
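As a rough illustration of the kind of preprocessing helper.py performs, here is a minimal sketch for a single CelebA image, assuming PIL-style images. The 108x108 face crop and the center offsets are illustrative assumptions, not necessarily the exact values helper.py uses.

import numpy as np
from PIL import Image

def preprocess_celeba(path, width=28, height=28, face_size=108):
    # Illustrative sketch only; helper.py does the real preprocessing.
    image = Image.open(path)
    # Center-crop the 178x218 CelebA image down to the face region.
    left = (image.size[0] - face_size) // 2
    top = (image.size[1] - face_size) // 2
    image = image.crop((left, top, left + face_size, top + face_size))
    # Resize the crop down to 28x28.
    image = image.resize((width, height), Image.BILINEAR)
    # Scale pixel values from [0, 255] to [-0.5, 0.5].
    return np.array(image, dtype=np.float32) / 255.0 - 0.5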

Build the Neural Network

You'll build the components necessary for a GAN by implementing the following functions:

  • model_inputs
  • discriminator
  • generator
  • model_loss
  • model_opt
  • train

Check the Version of TensorFlow and Access to GPU

This will check that you have the correct version of TensorFlow and access to a GPU.

In [4]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer.  You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))

# Check for a GPU
if not tf.test.gpu_device_name():
    warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
    print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
/usr/lib64/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6
  return f(*args, **kwds)
TensorFlow Version: 1.4.0
Default GPU Device: /device:GPU:0

Input

Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:

  • Real input images placeholder with rank 4 using image_width, image_height, and image_channels.
  • Z input placeholder with rank 2 using z_dim.
  • Learning rate placeholder with rank 0.

Return the placeholders in the following tuple: (tensor of real input images, tensor of z data, learning rate).

In [5]:
import problem_unittests as tests

def model_inputs(image_width, image_height, image_channels, z_dim):
    """
    Create the model inputs
    :param image_width: The input image width
    :param image_height: The input image height
    :param image_channels: The number of image channels
    :param z_dim: The dimension of Z
    :return: Tuple of (tensor of real input images, tensor of z data, learning rate)
    """
    # Real input images: rank-4 tensor of (batch, width, height, channels)
    real_input = tf.placeholder(tf.float32, (None, image_width, image_height, image_channels), name='input_real')
    # Z input: rank-2 tensor of (batch, z_dim)
    z_input = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
    # Learning rate: rank-0 (scalar) placeholder
    learning_rate = tf.placeholder(tf.float32, (), name='learning_rate')

    return real_input, z_input, learning_rate


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
Tests Passed

Discriminator

Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).

In [78]:
def discriminator(images, reuse=False):
    """
    Create the discriminator network
    :param images: Tensor of input image(s)
    :param reuse: Boolean if the weights should be reused
    :return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
    """
    alpha = 0.1      # leaky ReLU slope
    keep_prob = 0.9  # dropout keep probability

    with tf.variable_scope('discriminator', reuse=reuse):
        # 28x28xC -> 14x14x64
        conv1 = tf.layers.conv2d(images, filters=64, kernel_size=5, strides=2, padding='same',
                                 kernel_initializer=tf.contrib.layers.xavier_initializer())
        conv1 = tf.maximum(alpha * conv1, conv1)  # leaky ReLU

        # 14x14x64 -> 7x7x128
        conv2 = tf.layers.conv2d(conv1, filters=128, kernel_size=5, strides=2, padding='same',
                                 kernel_initializer=tf.contrib.layers.xavier_initializer())
        conv2 = tf.layers.batch_normalization(conv2, training=True)
        conv2 = tf.maximum(alpha * conv2, conv2)
        conv2 = tf.nn.dropout(conv2, keep_prob=keep_prob)

        # 7x7x128 -> 4x4x256
        conv3 = tf.layers.conv2d(conv2, filters=256, kernel_size=5, strides=2, padding='same',
                                 kernel_initializer=tf.contrib.layers.xavier_initializer())
        conv3 = tf.layers.batch_normalization(conv3, training=True)
        conv3 = tf.maximum(alpha * conv3, conv3)
        conv3 = tf.nn.dropout(conv3, keep_prob=keep_prob)

        # Flatten and classify real vs. fake
        _, height, width, channels = conv3.get_shape().as_list()
        flat = tf.reshape(conv3, (-1, height * width * channels))
        logits = tf.layers.dense(flat, 1)
        probabilities = tf.sigmoid(logits)

    return probabilities, logits


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)
Tests Passed

Generator

Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.

In [85]:
import numpy as np

def generator(z, out_channel_dim, is_train=True):
    """
    Create the generator network
    :param z: Input z
    :param out_channel_dim: The number of channels in the output image
    :param is_train: Boolean if generator is being used for training
    :return: The tensor output of the generator
    """
    alpha = 0.1      # leaky ReLU slope
    keep_prob = 0.9  # dropout keep probability

    with tf.variable_scope('generator', reuse=not is_train):
        # First fully connected layer, reshaped to 7x7x512
        x = tf.layers.dense(z, units=7 * 7 * 512)
        x = tf.reshape(x, (-1, 7, 7, 512))
        x = tf.layers.batch_normalization(x, training=is_train)
        x = tf.maximum(alpha * x, x)  # leaky ReLU

        # 7x7x512 -> 14x14x256
        conv1 = tf.layers.conv2d_transpose(x, filters=256, kernel_size=(5, 5), strides=(2, 2), padding='same',
                                           kernel_initializer=tf.contrib.layers.xavier_initializer_conv2d())
        conv1 = tf.layers.batch_normalization(conv1, training=is_train)
        conv1 = tf.maximum(alpha * conv1, conv1)
        conv1 = tf.nn.dropout(conv1, keep_prob=keep_prob)

        # 14x14x256 -> 28x28x128
        conv2 = tf.layers.conv2d_transpose(conv1, filters=128, kernel_size=(5, 5), strides=(2, 2), padding='same',
                                           kernel_initializer=tf.contrib.layers.xavier_initializer_conv2d())
        conv2 = tf.layers.batch_normalization(conv2, training=is_train)
        conv2 = tf.maximum(alpha * conv2, conv2)
        conv2 = tf.nn.dropout(conv2, keep_prob=keep_prob)

        # 28x28x128 -> 28x28x64
        conv3 = tf.layers.conv2d_transpose(conv2, filters=64, kernel_size=(5, 5), strides=(1, 1), padding='same',
                                           kernel_initializer=tf.contrib.layers.xavier_initializer_conv2d())
        conv3 = tf.layers.batch_normalization(conv3, training=is_train)
        conv3 = tf.maximum(alpha * conv3, conv3)
        conv3 = tf.nn.dropout(conv3, keep_prob=keep_prob)

        # Output layer: 28x28xout_channel_dim
        logits = tf.layers.conv2d_transpose(conv3, filters=out_channel_dim, kernel_size=(5, 5), strides=(1, 1),
                                            padding='same',
                                            kernel_initializer=tf.contrib.layers.xavier_initializer_conv2d())
        # tanh keeps the outputs in [-1, 1], matching the rescaled real images
        out = tf.tanh(logits)

        return out


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)
Tests Passed

Loss

Implement model_loss to build the GAN for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:

  • discriminator(images, reuse=False)
  • generator(z, out_channel_dim, is_train=True)

In [86]:
label_smooth = 0.1
def model_loss(input_real, input_z, out_channel_dim):
    """
    Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param out_channel_dim: The number of channels in the output image
    :return: A tuple of (discriminator loss, generator loss)
    """
    g_model = generator(input_z, out_channel_dim, is_train=True)
    d_model_real, d_logits_real = discriminator(input_real)
    d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)

    # One-sided label smoothing: train the discriminator toward 0.9 instead
    # of 1.0 on real images so it doesn't become overconfident.
    d_loss_real = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real) * (1 - label_smooth)))
    d_loss_fake = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
    # The generator tries to make the discriminator label its images as real.
    g_loss = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))

    d_loss = d_loss_real + d_loss_fake

    return d_loss, g_loss


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_loss(model_loss)
Tests Passed

Optimization

Implement model_opt to create the optimization operations for the GAN. Use tf.trainable_variables to get all the trainable variables, then filter them by the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).

In [87]:
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :return: A tuple of (discriminator training operation, generator training operation)
    """
    # Get the trainable variables and split them into G and D parts by scope name
    t_vars = tf.trainable_variables()
    g_vars = [var for var in t_vars if var.name.startswith('generator')]
    d_vars = [var for var in t_vars if var.name.startswith('discriminator')]

    # Batch normalization keeps its moving-average updates in UPDATE_OPS;
    # run each network's update ops together with its own training step.
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    d_updates = [op for op in update_ops if op.name.startswith('discriminator')]
    g_updates = [op for op in update_ops if op.name.startswith('generator')]

    with tf.control_dependencies(d_updates):
        d_train_opt = tf.train.AdamOptimizer(learning_rate=learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
    with tf.control_dependencies(g_updates):
        g_train_opt = tf.train.AdamOptimizer(learning_rate=learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)

    return d_train_opt, g_train_opt


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)
Tests Passed

Neural Network Training

Show Output

Use this function to show the current output of the generator during training. It will help you determine how well the GAN is training.

In [88]:
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np

def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
    """
    Show example output for the generator
    :param sess: TensorFlow session
    :param n_images: Number of Images to display
    :param input_z: Input Z Tensor
    :param out_channel_dim: The number of channels in the output image
    :param image_mode: The mode to use for images ("RGB" or "L")
    """
    cmap = None if image_mode == 'RGB' else 'gray'
    z_dim = input_z.get_shape().as_list()[-1]
    example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])

    samples = sess.run(
        generator(input_z, out_channel_dim, False),
        feed_dict={input_z: example_z})

    images_grid = helper.images_square_grid(samples, image_mode)
    pyplot.imshow(images_grid, cmap=cmap)
    pyplot.show()

Train

Implement train to build and train the GANs. Use the following functions you implemented:

  • model_inputs(image_width, image_height, image_channels, z_dim)
  • model_loss(input_real, input_z, out_channel_dim)
  • model_opt(d_loss, g_loss, learning_rate, beta1)

Use show_generator_output to show the generator's output while you train. Running show_generator_output for every batch would drastically increase training time and the size of the notebook. It's recommended to print the generator output every 100 batches.

In [89]:
import datetime as dt

def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
    """
    Train the GAN
    :param epoch_count: Number of epochs
    :param batch_size: Batch Size
    :param z_dim: Z dimension
    :param learning_rate: Learning Rate
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :param get_batches: Function to get batches
    :param data_shape: Shape of the data
    :param data_image_mode: The image mode to use for images ("RGB" or "L")
    """
    # Build the model
    input_real, input_z, lr = model_inputs(*data_shape[1:], z_dim)
    d_loss, g_loss = model_loss(input_real, input_z, data_shape[-1])
    d_train_opt, g_train_opt = model_opt(d_loss, g_loss, lr, beta1)

    losses = []
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for epoch_i in range(epoch_count):
            print(epoch_i)
            batch_i = 0
            for batch_images in get_batches(batch_size):
                batch_i += 1

                # Sample random noise for G
                batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))

                # Run the optimizers. The batches arrive scaled to [-0.5, 0.5],
                # so multiply by 2 to match the generator's tanh range of [-1, 1].
                _ = sess.run(d_train_opt, feed_dict={input_real: 2 * batch_images, input_z: batch_z, lr: learning_rate})
                _ = sess.run(g_train_opt, feed_dict={input_real: 2 * batch_images, input_z: batch_z, lr: learning_rate})

                if batch_i % 100 == 0:
                    train_loss_d = d_loss.eval({input_real: 2 * batch_images, input_z: batch_z})
                    train_loss_g = g_loss.eval({input_real: 2 * batch_images, input_z: batch_z})

                    print("Time {}...".format(dt.datetime.now()),
                          "Epoch {}/{}...".format(epoch_i + 1, epoch_count),
                          "Batch {}...".format(batch_i + 1),
                          "Discriminator Loss: {:.4f}...".format(train_loss_d),
                          "Generator Loss: {:.4f}".format(train_loss_g))
                    # Save losses to view after training
                    losses.append((train_loss_d, train_loss_g))

                    # Show sample generator output every 100 batches
                    show_generator_output(sess, n_images=16, input_z=input_z,
                                          out_channel_dim=data_shape[-1], image_mode=data_image_mode)
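The train function collects (discriminator, generator) loss pairs in losses every 100 batches but never displays them. If you want to view them after training, a minimal sketch along these lines would work, assuming you modify train to return losses (an assumption; the version above does not). np and pyplot are already imported earlier in the notebook.

def plot_losses(losses):
    # Plot the (discriminator, generator) loss pairs recorded during training.
    losses = np.array(losses)
    pyplot.plot(losses[:, 0], label='Discriminator')
    pyplot.plot(losses[:, 1], label='Generator')
    pyplot.xlabel('Checkpoint (every 100 batches)')
    pyplot.ylabel('Loss')
    pyplot.legend()
    pyplot.show()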

MNIST

Test your GAN architecture on MNIST. After 2 epochs, the GAN should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator, or close to 0.

In [93]:
batch_size = 128
z_dim = 100
learning_rate = 1e-3
beta1 = 0.1


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 2

mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
          mnist_dataset.shape, mnist_dataset.image_mode)
0
Time 2017-11-25 20:10:16.061997... Epoch 1/2... Batch 101... Discriminator Loss: 1.4860... Generator Loss: 1.2211
Time 2017-11-25 20:10:34.800102... Epoch 1/2... Batch 201... Discriminator Loss: 1.5032... Generator Loss: 0.5104
Time 2017-11-25 20:10:53.081216... Epoch 1/2... Batch 301... Discriminator Loss: 1.4103... Generator Loss: 1.0336
Time 2017-11-25 20:11:11.137114... Epoch 1/2... Batch 401... Discriminator Loss: 1.4084... Generator Loss: 1.2841
1
Time 2017-11-25 20:11:42.017409... Epoch 2/2... Batch 101... Discriminator Loss: 1.4988... Generator Loss: 0.4660
Time 2017-11-25 20:12:00.518738... Epoch 2/2... Batch 201... Discriminator Loss: 1.3546... Generator Loss: 1.2128
Time 2017-11-25 20:12:19.120234... Epoch 2/2... Batch 301... Discriminator Loss: 1.3251... Generator Loss: 1.1231
Time 2017-11-25 20:12:37.502159... Epoch 2/2... Batch 401... Discriminator Loss: 1.3154... Generator Loss: 1.0367

CelebA

Run your GAN on CelebA. It will take around 20 minutes on an average GPU to run one epoch. You can run all the epochs or stop when it starts to generate realistic faces.

In [97]:
batch_size = 128
z_dim = 100
learning_rate = 1e-4
beta1 = 0.5


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 25

celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
    train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
          celeba_dataset.shape, celeba_dataset.image_mode)
0
Time 2017-11-25 20:54:04.489219... Epoch 1/25... Batch 101... Discriminator Loss: 0.5882... Generator Loss: 2.3093
Time 2017-11-25 20:54:34.163197... Epoch 1/25... Batch 201... Discriminator Loss: 0.6959... Generator Loss: 1.6672
Time 2017-11-25 20:55:03.974976... Epoch 1/25... Batch 301... Discriminator Loss: 0.7042... Generator Loss: 2.1341
Time 2017-11-25 20:55:33.615327... Epoch 1/25... Batch 401... Discriminator Loss: 0.7383... Generator Loss: 2.3970
Time 2017-11-25 20:56:03.391813... Epoch 1/25... Batch 501... Discriminator Loss: 0.9391... Generator Loss: 1.4033
Time 2017-11-25 20:56:33.033192... Epoch 1/25... Batch 601... Discriminator Loss: 1.1585... Generator Loss: 1.6291
Time 2017-11-25 20:57:03.493518... Epoch 1/25... Batch 701... Discriminator Loss: 1.1246... Generator Loss: 1.0667
Time 2017-11-25 20:57:33.076542... Epoch 1/25... Batch 801... Discriminator Loss: 1.0783... Generator Loss: 1.3533
Time 2017-11-25 20:58:03.203917... Epoch 1/25... Batch 901... Discriminator Loss: 1.2143... Generator Loss: 0.7378
Time 2017-11-25 20:58:33.288080... Epoch 1/25... Batch 1001... Discriminator Loss: 1.2720... Generator Loss: 0.6887
Time 2017-11-25 20:59:03.439073... Epoch 1/25... Batch 1101... Discriminator Loss: 1.1977... Generator Loss: 0.9388
Time 2017-11-25 20:59:33.501985... Epoch 1/25... Batch 1201... Discriminator Loss: 1.0461... Generator Loss: 1.0093
Time 2017-11-25 21:00:02.572600... Epoch 1/25... Batch 1301... Discriminator Loss: 1.2031... Generator Loss: 0.7414
Time 2017-11-25 21:00:32.306654... Epoch 1/25... Batch 1401... Discriminator Loss: 1.0999... Generator Loss: 1.2743
Time 2017-11-25 21:01:01.373404... Epoch 1/25... Batch 1501... Discriminator Loss: 1.2684... Generator Loss: 0.8703
1
Time 2017-11-25 21:01:54.946931... Epoch 2/25... Batch 101... Discriminator Loss: 1.1367... Generator Loss: 1.1505
Time 2017-11-25 21:02:24.624770... Epoch 2/25... Batch 201... Discriminator Loss: 1.1885... Generator Loss: 1.0629
Time 2017-11-25 21:02:54.615338... Epoch 2/25... Batch 301... Discriminator Loss: 1.2293... Generator Loss: 1.2300
Time 2017-11-25 21:03:24.732518... Epoch 2/25... Batch 401... Discriminator Loss: 1.2939... Generator Loss: 0.9647
Time 2017-11-25 21:03:54.551237... Epoch 2/25... Batch 501... Discriminator Loss: 1.2056... Generator Loss: 0.8769
Time 2017-11-25 21:04:24.568584... Epoch 2/25... Batch 601... Discriminator Loss: 1.3554... Generator Loss: 0.7014
Time 2017-11-25 21:04:54.637613... Epoch 2/25... Batch 701... Discriminator Loss: 1.3120... Generator Loss: 0.8201
Time 2017-11-25 21:05:24.485378... Epoch 2/25... Batch 801... Discriminator Loss: 1.2455... Generator Loss: 1.1662
Time 2017-11-25 21:05:54.438117... Epoch 2/25... Batch 901... Discriminator Loss: 1.2139... Generator Loss: 1.1905
Time 2017-11-25 21:06:23.996693... Epoch 2/25... Batch 1001... Discriminator Loss: 1.3124... Generator Loss: 1.0547
Time 2017-11-25 21:06:53.842486... Epoch 2/25... Batch 1101... Discriminator Loss: 1.1990... Generator Loss: 1.1366
Time 2017-11-25 21:07:23.437291... Epoch 2/25... Batch 1201... Discriminator Loss: 1.2690... Generator Loss: 0.9307
Time 2017-11-25 21:07:53.116390... Epoch 2/25... Batch 1301... Discriminator Loss: 1.3635... Generator Loss: 0.8450
Time 2017-11-25 21:08:23.268640... Epoch 2/25... Batch 1401... Discriminator Loss: 1.2713... Generator Loss: 0.9806
Time 2017-11-25 21:08:53.397926... Epoch 2/25... Batch 1501... Discriminator Loss: 1.3251... Generator Loss: 0.8312
2
Time 2017-11-25 21:09:46.976667... Epoch 3/25... Batch 101... Discriminator Loss: 1.3254... Generator Loss: 0.9481
Time 2017-11-25 21:10:16.979280... Epoch 3/25... Batch 201... Discriminator Loss: 1.2502... Generator Loss: 0.8547
Time 2017-11-25 21:10:46.802363... Epoch 3/25... Batch 301... Discriminator Loss: 1.4021... Generator Loss: 0.9372
Time 2017-11-25 21:11:16.960719... Epoch 3/25... Batch 401... Discriminator Loss: 1.3571... Generator Loss: 0.8080
Time 2017-11-25 21:11:47.042712... Epoch 3/25... Batch 501... Discriminator Loss: 1.3240... Generator Loss: 0.8157
Time 2017-11-25 21:12:16.973091... Epoch 3/25... Batch 601... Discriminator Loss: 1.3800... Generator Loss: 0.7414
Time 2017-11-25 21:12:46.547545... Epoch 3/25... Batch 701... Discriminator Loss: 1.2853... Generator Loss: 0.8418
Time 2017-11-25 21:13:16.373014... Epoch 3/25... Batch 801... Discriminator Loss: 1.4884... Generator Loss: 0.7708
Time 2017-11-25 21:13:46.835697... Epoch 3/25... Batch 901... Discriminator Loss: 1.3865... Generator Loss: 0.7874
Time 2017-11-25 21:14:16.901036... Epoch 3/25... Batch 1001... Discriminator Loss: 1.3797... Generator Loss: 0.7322
Time 2017-11-25 21:14:47.073109... Epoch 3/25... Batch 1101... Discriminator Loss: 1.3817... Generator Loss: 1.0733
Time 2017-11-25 21:15:17.203804... Epoch 3/25... Batch 1201... Discriminator Loss: 1.2683... Generator Loss: 0.8369
Time 2017-11-25 21:15:46.954584... Epoch 3/25... Batch 1301... Discriminator Loss: 1.3143... Generator Loss: 0.8743
Time 2017-11-25 21:16:16.633572... Epoch 3/25... Batch 1401... Discriminator Loss: 1.3134... Generator Loss: 0.9461
Time 2017-11-25 21:16:46.712303... Epoch 3/25... Batch 1501... Discriminator Loss: 1.2833... Generator Loss: 0.9016
3
Time 2017-11-25 21:17:41.012387... Epoch 4/25... Batch 101... Discriminator Loss: 1.3428... Generator Loss: 0.8272
Time 2017-11-25 21:18:10.579839... Epoch 4/25... Batch 201... Discriminator Loss: 1.3269... Generator Loss: 0.8903
Time 2017-11-25 21:18:40.116744... Epoch 4/25... Batch 301... Discriminator Loss: 1.2860... Generator Loss: 0.9486
Time 2017-11-25 21:19:09.818765... Epoch 4/25... Batch 401... Discriminator Loss: 1.2968... Generator Loss: 0.8865
Time 2017-11-25 21:19:39.052822... Epoch 4/25... Batch 501... Discriminator Loss: 1.3490... Generator Loss: 0.8049
Time 2017-11-25 21:20:08.497462... Epoch 4/25... Batch 601... Discriminator Loss: 1.3172... Generator Loss: 0.7481
Time 2017-11-25 21:20:37.476301... Epoch 4/25... Batch 701... Discriminator Loss: 1.2723... Generator Loss: 0.8849
Time 2017-11-25 21:21:06.511531... Epoch 4/25... Batch 801... Discriminator Loss: 1.3565... Generator Loss: 0.8825
Time 2017-11-25 21:21:36.083713... Epoch 4/25... Batch 901... Discriminator Loss: 1.2986... Generator Loss: 0.8614
Time 2017-11-25 21:22:05.673187... Epoch 4/25... Batch 1001... Discriminator Loss: 1.4586... Generator Loss: 0.7042
Time 2017-11-25 21:22:35.218540... Epoch 4/25... Batch 1101... Discriminator Loss: 1.3295... Generator Loss: 0.8640
Time 2017-11-25 21:23:05.579732... Epoch 4/25... Batch 1201... Discriminator Loss: 1.3289... Generator Loss: 0.7985
Time 2017-11-25 21:23:35.433574... Epoch 4/25... Batch 1301... Discriminator Loss: 1.3362... Generator Loss: 0.8368
Time 2017-11-25 21:24:05.852115... Epoch 4/25... Batch 1401... Discriminator Loss: 1.3247... Generator Loss: 0.8790
Time 2017-11-25 21:24:35.730045... Epoch 4/25... Batch 1501... Discriminator Loss: 1.2130... Generator Loss: 0.8560
4
Time 2017-11-25 21:25:30.131937... Epoch 5/25... Batch 101... Discriminator Loss: 1.3338... Generator Loss: 0.8718
Time 2017-11-25 21:26:00.802591... Epoch 5/25... Batch 201... Discriminator Loss: 1.3487... Generator Loss: 0.7871
Time 2017-11-25 21:26:30.580670... Epoch 5/25... Batch 301... Discriminator Loss: 1.2752... Generator Loss: 0.9102
Time 2017-11-25 21:27:00.715996... Epoch 5/25... Batch 401... Discriminator Loss: 1.3650... Generator Loss: 0.6977
Time 2017-11-25 21:27:31.144502... Epoch 5/25... Batch 501... Discriminator Loss: 1.3093... Generator Loss: 0.8658
Time 2017-11-25 21:28:01.432834... Epoch 5/25... Batch 601... Discriminator Loss: 1.3169... Generator Loss: 0.7562
Time 2017-11-25 21:28:31.705666... Epoch 5/25... Batch 701... Discriminator Loss: 1.3333... Generator Loss: 0.8293
Time 2017-11-25 21:29:01.835176... Epoch 5/25... Batch 801... Discriminator Loss: 1.3077... Generator Loss: 0.8467
Time 2017-11-25 21:29:32.065661... Epoch 5/25... Batch 901... Discriminator Loss: 1.3630... Generator Loss: 0.9051
Time 2017-11-25 21:30:02.596748... Epoch 5/25... Batch 1001... Discriminator Loss: 1.3976... Generator Loss: 0.9103
Time 2017-11-25 21:30:32.626021... Epoch 5/25... Batch 1101... Discriminator Loss: 1.3361... Generator Loss: 0.9512
Time 2017-11-25 21:31:02.753453... Epoch 5/25... Batch 1201... Discriminator Loss: 1.3373... Generator Loss: 0.9090
Time 2017-11-25 21:31:32.654261... Epoch 5/25... Batch 1301... Discriminator Loss: 1.3848... Generator Loss: 0.8928
Time 2017-11-25 21:32:02.796276... Epoch 5/25... Batch 1401... Discriminator Loss: 1.2958... Generator Loss: 0.8715
Time 2017-11-25 21:32:33.045672... Epoch 5/25... Batch 1501... Discriminator Loss: 1.2874... Generator Loss: 0.8755
---------------------------------------------------------------------------
KeyboardInterrupt: training was stopped manually after the epoch 5 losses above, as the instructions allow (full traceback from interrupting sess.run omitted).

Submitting This Project

When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.